Generative adversarial network synthesized face detection based on deep alignment network
TANG Guihua, SUN Lei, MAO Xiuqing, DAI Leyu, HU Yongjin
Journal of Computer Applications    2021, 41 (7): 1922-1927.   DOI: 10.11772/j.issn.1001-9081.2020081214
Existing Generative Adversarial Network (GAN) synthesized face detection methods tend to misjudge real faces with angles and occlusion; therefore, a GAN-synthesized face detection method based on Deep Alignment Network (DAN) was proposed. Firstly, a facial landmark extraction network was designed based on DAN to extract the locations of facial landmarks of genuine and synthesized faces. Then, in order to reduce redundant information and feature dimensionality, each group of landmarks was mapped to three-dimensional space by using Principal Component Analysis (PCA). Finally, the features were classified by a Support Vector Machine (SVM) with 5-fold cross-validation and the accuracy was calculated. Experimental results show that the proposed method alleviates the face misalignment caused by landmark location errors by improving the accuracy of facial landmark location, which reduces the misjudgment rate on real faces. Compared with the VGG19, XceptionNet and Dlib-SVM methods, the proposed method has the Area Under the Receiver Operating Characteristic curve (AUC) increased by 4.48 to 32.96 percentage points and the Average Precision (AP) increased by 4.26 to 33.12 percentage points on frontal faces, and has the AUC increased by 10.56 to 30.75 percentage points and the AP increased by 7.42 to 42.45 percentage points on faces with angles and occlusion.
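The classification step pairs the PCA-reduced landmark features with an SVM scored by 5-fold cross-validation. The split protocol can be sketched in plain Python (a minimal sketch; the landmark extraction, PCA mapping and SVM itself are omitted, and `k_fold_indices` is an illustrative helper, not code from the paper):

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Split sample indices into k disjoint folds for cross-validation."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    # Yield (train, test) index pairs, one per fold.
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

# Each of the 5 rounds trains the classifier on 4 folds and scores the
# held-out fold; the reported accuracy is the mean over the 5 rounds.
splits = list(k_fold_indices(100, k=5))
```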
Adaptive network transmission mechanism based on forward error correction
ZHU Yongjin, YIN Fei, DOU Longlong, WU Kun, ZHANG Zhiwei, QIAN Zhuzhong
Journal of Computer Applications    2021, 41 (3): 825-832.   DOI: 10.11772/j.issn.1001-9081.2020060948
Aiming at the degradation of Transmission Control Protocol (TCP) transmission performance in wireless networks caused by the retransmission mechanism triggered by packet loss, an adaptive transmission mechanism based on Forward Error Correction (AdaptiveFEC) was proposed. In this mechanism, the transmission performance of TCP was improved by avoiding triggering the TCP retransmission mechanism, which was realized by reducing data segment loss with forward error correction. Firstly, the optimal redundant segment ratio for the current time was selected according to the current network status and the data transmission characteristics of the current connection. Then, the network status was estimated by analyzing the data segment sequence numbers in the TCP data segments, so that the redundant segment ratio was dynamically updated according to the network. Extensive experimental results show that, in a transmission environment with a round-trip delay of 20 ms and a packet loss rate of 5%, AdaptiveFEC can increase the transmission rate of a TCP connection by 42% on average compared to a static forward error correction mechanism, and the download speed can be doubled when the proposed mechanism is applied to file download applications.
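The abstract does not give the exact rule for selecting the redundant segment ratio; the sketch below only illustrates the general idea of sizing FEC redundancy to an estimated loss rate (`redundant_segments` and its safety margin are assumptions, not the paper's formula):

```python
import math

def redundant_segments(n_data, loss_rate, margin=1):
    """Pick a redundancy count r for an FEC block of n_data segments.

    With loss rate p, about n_data * p segments are expected to be lost;
    an (n_data + r, n_data) erasure code recovers the block as long as at
    most r segments are lost, so r is sized to the expected loss plus a
    safety margin.
    """
    expected_losses = n_data * loss_rate
    return math.ceil(expected_losses) + margin

# e.g. a 20-segment block at 5% loss: one expected loss, plus the margin.
r = redundant_segments(20, 0.05)   # -> 2
```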
Early identification and prediction of abnormal carotid arteries based on variational autoencoder
HUANG Xiaoxiang, HU Yongmei, WU Dan, REN Lijie
Journal of Computer Applications    2021, 41 (10): 3082-3088.   DOI: 10.11772/j.issn.1001-9081.2020101695
Carotid artery stenosis, increased Carotid Intima-Media Thickness (CIMT) or carotid artery plaque may lead to stroke. For large-scale preliminary screening of stroke, an improved Variational AutoEncoder (VAE) based on medical data was proposed to predict and identify abnormal carotid arteries. Firstly, for the missing values in the medical data, the K-Nearest Neighbor (KNN) method, a Mixture of mean, mode and KNN (M-KNN) method and the improved VAE were respectively used to impute the missing values and obtain a complete dataset, improving the application range of the data. Secondly, the feature attributes were analyzed and the features were ranked in order of importance. Thirdly, four supervised algorithms, Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF) and eXtreme Gradient Boosting Tree (XGBT), were combined with Genetic Algorithm (GA) to build abnormal carotid artery identification models. Finally, based on the improved VAE, a semi-supervised abnormal carotid artery prediction model was built. Compared to the baseline models, the semi-supervised model based on the improved VAE improves performance significantly, with a sensitivity of 0.893 8, a specificity of 0.927 2, an F1-measure of 0.910 5 and a classification accuracy of 0.910 5. Experimental results show that this semi-supervised model can be used to identify abnormal carotid arteries and thus serves as a tool to recognize high-risk groups for stroke, preventing and reducing the occurrence of stroke.
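The KNN imputation step can be sketched on a single machine in pure Python (`knn_impute` is an illustrative helper; the real pipeline works on medical records with many attributes and also uses mode and VAE-based imputation):

```python
def knn_impute(records, k=3):
    """Fill missing values (None) using the k nearest complete records.

    Distance is computed over the features the incomplete record has
    observed; a missing entry is replaced by the mean of the k
    neighbours' values for that feature.
    """
    complete = [r for r in records if None not in r]
    imputed = []
    for r in records:
        if None not in r:
            imputed.append(list(r))
            continue
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(r, c) if a is not None)
        neighbours = sorted(complete, key=dist)[:k]
        filled = [v if v is not None else
                  sum(c[i] for c in neighbours) / len(neighbours)
                  for i, v in enumerate(r)]
        imputed.append(filled)
    return imputed
```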
Verification of control-data plane consistency in software defined network
ZHU Mengdi, SHU Yong’an
Journal of Computer Applications    2020, 40 (6): 1751-1754.   DOI: 10.11772/j.issn.1001-9081.2019101712
Aiming at the problem of inconsistency between the network policies of the control layer and the flow rules of the data layer in Software Defined Network (SDN), a detection model for Verifying control-data plane Consistency (VeriC) was proposed. Firstly, the packet processing subsystem was realized through the VeriC pipeline on the switch: it sampled data packets and updated the tag field in each sampled packet as the packet passed through the switch. Then, after the update was completed, the tag values were sent to the server and stored in the real tag value group. Finally, the real tag value group and the stored correct tag value group were sent to the verification subsystem to perform the consistency verification. If the verification failed, the two groups of tag values were sent to the localization subsystem to locate the switch with the erroneous flow table entry. A fat-tree topology with 4 Pods was generated by the ns-3 simulator, on which the accuracies of consistency detection and faulty switch localization of VeriC are higher than those of VeriDP, and the overall performance of VeriC is higher than that of the 2MVeri model. Theoretical analysis and simulation results show that the VeriC detection model can not only perform consistency detection and accurately locate the faulty switch, but also takes a shorter time to locate the faulty switch than the other detection models compared.
Outdoor weather image classification based on feature fusion
GUO Zhiqing, HU Yongwu, LIU Peng, YANG Jie
Journal of Computer Applications    2020, 40 (4): 1023-1029.   DOI: 10.11772/j.issn.1001-9081.2019081449
Weather conditions have a great influence on the imaging performance of outdoor video equipment. In order to achieve adaptive adjustment of imaging equipment in inclement weather and thus improve the effect of intelligent monitoring systems, considering that traditional weather image classification methods classify poorly and cannot distinguish similar weather phenomena, and aiming at the low accuracy of deep learning methods on weather recognition, a feature fusion model combining traditional methods with deep learning methods was proposed. In the fusion model, four manually designed algorithms were used to extract traditional features, and AlexNet was used to extract deep features. The fused eigenvectors were used to discriminate the weather conditions of an image. The accuracy of the fusion model on a multi-background dataset reaches 93.90%, which is better than those of the three common methods used for comparison, and the model also performs well on the Average Precision (AP) and Average Recall (AR) indicators; on a single-background dataset, the model reaches an accuracy of 96.97%, has AP and AR better than those of the other models, and can well recognize weather images with similar features. The experimental results show that the proposed feature fusion model can combine the advantages of traditional methods and deep learning methods to improve the accuracy of existing weather image classification methods, as well as improve the recognition rate for weather phenomena with similar features.
Improved decision diagram for attribute-based access control policy evaluation and management
LUO Xiaofeng, YANG Xingchun, HU Yong
Journal of Computer Applications    2019, 39 (12): 3569-3574.   DOI: 10.11772/j.issn.1001-9081.2019040603
The Multi-data-type Interval Decision Diagram (MIDD) approach expresses and handles the critical marks of attributes incorrectly, and expresses and handles obligations and advices ambiguously, resulting in inconsistent node expression and increased processing complexity. Aiming at these problems, some improvements and extensions were proposed. Firstly, the graph nodes in MIDD, which took entity attributes as their unit, were converted into nodes taking elements as their unit, so that the elements of an attribute-based access control policy could be represented accurately, solving the problem of handling the critical marks. Secondly, obligations and advices were treated as elements and expressed by nodes. Finally, the combining algorithms of rules and policies were added to the decision nodes, so that the Policy Decision Point (PDP) could use them to make decisions on access requests. The analysis results show that the spatio-temporal complexity of the proposed approach is similar to that of the original approach. A comparative simulation of the two approaches shows that when each attribute has only one subsidiary attribute (the most common application situation), the average decision time difference per access request between the two approaches is on the order of 0.01 μs. This confirms the correctness of the complexity analysis, indicating that the performances of the two approaches are similar. A simulation on the number of subsidiary attributes shows that, even with 10 subsidiary attributes (very rare in practical applications), the average decision time difference of the two approaches stays within the same order of magnitude. The proposed approach not only retains the correctness, consistency and convenience of the original approach, but also extends its application scope from eXtensible Access Control Markup Language (XACML) policies to general attribute-based access control policies.
Energy consumption of WSN with multi-mobile sinks considering QoS
WANG Manman, SHU Yong'an
Journal of Computer Applications    2018, 38 (3): 758-762.   DOI: 10.11772/j.issn.1001-9081.2017082130
Concerning the excessively high energy consumption, long transmission delay and poor data integrity of nodes in Wireless Sensor Network (WSN), a routing algorithm named MSTSDI (Multi-Sink Time Sensitive Data Integrity) based on multiple mobile sinks and considering Quality of Service (QoS) was proposed. Firstly, the density of the nodes was determined by the strength of the signal received from the base station, and the WSN was divided into autonomous areas according to K-means theory. Secondly, a mobile sink was assigned to each autonomous area, and the trajectory of each mobile sink was determined by using Support Vector Regression (SVR). Finally, depth and queue potential fields were introduced to transmit data packets with high sensitivity and high data integrity through the Improved-IDDR (Integrity and Delay Differentiated Routing) algorithm. Theoretical analysis and simulation results showed that, compared with the GLRM (Grid-based Load-balanced Routing Method) algorithm and the LEACH (Low Energy Adaptive Clustering Hierarchy) protocol, the Improved-IDDR routing strategy decreased energy consumption by 21.2% and 23.7% respectively, decreased end-to-end delay by 15.23% and 17.93% respectively, and achieved better data integrity. Experimental results showed that MSTSDI can effectively improve system performance in real networks.
Abnormal user detection in enterprise network based on graph analysis and support vector machine
XU Bing, GUO Yuanbo, YE Ziwei, HU Yongjin
Journal of Computer Applications    2018, 38 (2): 357-362.   DOI: 10.11772/j.issn.1001-9081.2017081951
In an enterprise network, if an internal attacker obtains a user's identity authentication information, his behavior is very difficult to distinguish from that of the normal user. Current research on abnormal user detection in enterprise networks is relatively simple and achieves low detection rates. A user's authentication activity directly reflects the user's interaction with the various resources and personnel in the network; based on this, a new abnormal user detection method using user authentication activity information was proposed. The user's authentication activities were used to generate a user authentication graph, and then attributes of the authentication graph were extracted based on graph analysis, such as the size of the largest connected component of the graph and the number of isolated authentications. These attributes reflect the user's authentication behavioral characteristics in the enterprise network. Finally, a supervised Support Vector Machine (SVM) was used to model the extracted graph attributes to indirectly identify and detect abnormal users in the network. After extracting the user graph vectors, different values of the training/test split, the penalty parameter and the kernel function were analyzed. Through the adjustment of these parameters, the recall, precision and F1-Score of the proposed method all reached more than 80%. The experimental results show that the proposed method can effectively detect abnormal users in the enterprise network.
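One of the named graph attributes, the size of the largest connected component of the authentication graph, can be computed from authentication events with a plain breadth-first search (a sketch, not the paper's code):

```python
from collections import defaultdict, deque

def largest_component_size(edges):
    """Size of the largest connected component of an undirected graph.

    The graph is given as (u, v) pairs, e.g. user-to-machine
    authentication events.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        # Breadth-first search over one component.
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        best = max(best, size)
    return best
```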
Load balancing scheme based on multi-objective optimization for software defined network
LIU Biguo, SHU Yong'an, FU Yinghui
Journal of Computer Applications    2017, 37 (6): 1555-1559.   DOI: 10.11772/j.issn.1001-9081.2017.06.1555
In order to solve the load balancing problem of the Software Defined Network (SDN) control plane, a Dynamic Switch Migration Algorithm based on Multi-objective optimization (M-DSMA) was proposed. Firstly, the mapping relationship between switches and controllers was transformed into a 0-1 matrix optimization problem. Then, two conflicting objective functions, the control plane load balancing degree and the communication overhead generated by switch migration, were simultaneously optimized by a multi-objective genetic algorithm based on the Non-dominated Sorting Genetic Algorithm-Ⅱ (NSGA-Ⅱ). In the multi-objective optimization process, individuals were selected by a fitness function for crossover and mutation, and then fast non-dominated sorting and an elitist strategy were applied to the population. The next generation population was generated and the whole population was continually evolved, so that the global optimal solution was searched for. The simulation results show that the proposed M-DSMA can effectively balance the control plane load and reduce the communication overhead by 30% to 50% compared with the Dynamic Switch Migration Algorithm (DSMA). The proposed algorithm has significant advantages in improving control plane scalability.
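The fast non-dominated sorting step of NSGA-Ⅱ, applied to the population each generation, can be sketched as follows (minimisation of both objectives is assumed; crossover, mutation and crowding distance are omitted):

```python
def fast_non_dominated_sort(objectives):
    """Partition solutions into Pareto fronts (NSGA-II style).

    `objectives` is a list of tuples to be minimised; front 0 contains
    the non-dominated solutions, front 1 those dominated only by front 0,
    and so on.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    n = len(objectives)
    dominated_by = [0] * n                 # how many solutions dominate i
    dominates_set = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominates_set[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dominated_by[i] += 1
    fronts, current = [], [i for i in range(n) if dominated_by[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominates_set[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```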
Load balancing mechanism for hierarchical controllers based on software defined network
ZHU Shike, SHU Yong'an
Journal of Computer Applications    2017, 37 (12): 3351-3355.   DOI: 10.11772/j.issn.1001-9081.2017.12.3351
Aiming at the problems that the communication overhead between controllers is large and controller throughput is low during the load balancing process of multiple controllers in Software Defined Network (SDN), a hierarchical controller load balancing mechanism was proposed. Based on a hierarchical architecture, load balancing was completed through the collaboration of a super controller and domain controllers, and a predefined load threshold was used to reduce the message exchange overhead between the domain controllers and the super controller. At the same time, the most overloaded domain controller was effectively selected, several switches conforming to the migration criteria were chosen from the switches it controlled, and these switches were migrated to several domain controllers with high overall performance, which solved the problem of load imbalance among multiple controllers. The experimental results showed that, compared with the COoperative Load BAlancing Scheme for hierarchical SDN controllers (COLBAS) and the Dynamic and Adaptive algorithm for controller Load Balancing (DALB), the number of messages in the proposed system was reduced by about 79 percentage points, and the throughput of the proposed system was about 8.57% higher than that of DALB and 52.01% higher than that of COLBAS. The proposed mechanism can effectively reduce the communication overhead and improve the system throughput to achieve a better load balancing effect.
Star join algorithm based on multi-dimensional Bloom filter in Spark
ZHOU Guoliang, SA Churila, ZHU Yongli
Journal of Computer Applications    2016, 36 (2): 353-357.   DOI: 10.11772/j.issn.1001-9081.2016.02.0353
To meet the high-performance analysis requirements for real-time data in On-Line Analytical Processing (OLAP) systems, a new star join algorithm suitable for the Spark platform was proposed based on the Multi-Dimensional Bloom Filter (MDBF), namely SMDBFSJ (Spark Multi-Dimensional Bloom Filter Star Join). First of all, the MDBF was built according to the dimension tables and, thanks to its small size, broadcast to all the nodes. Then the fact table was filtered completely on the local node, with no data movement between nodes. Finally, the filtered fact table and the dimension tables were joined using the repartition join model to get the final result. The SMDBFSJ algorithm avoids moving the fact table, reduces the size of the broadcast data using MDBF, and fully combines the advantages of broadcast join and repartition join. Experimental results in stand-alone and cluster environments prove the validity of the SMDBFSJ algorithm: it obtains about three times the performance of repartition join in Spark.
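A minimal Bloom filter in the spirit of one MDBF dimension is sketched below: one such filter per dimension table's join keys lets a fact row be discarded locally when any of its foreign keys cannot match (the sizes and hash construction here are illustrative choices, not the paper's parameters):

```python
import hashlib

class BloomFilter:
    """A compact set sketch: no false negatives, tunable false positives."""

    def __init__(self, n_bits=1024, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```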
Parallel cube computing in Spark
SA Churila, ZHOU Guoliang, SHI Lei, WANG Liuwang, SHI Xin, ZHU Yongli
Journal of Computer Applications    2016, 36 (2): 348-352.   DOI: 10.11772/j.issn.1001-9081.2016.02.0348
In view of the poor real-time response capability of traditional OnLine Analytical Processing (OLAP) when processing big data, how to accelerate the computation of data cubes on Spark was investigated, and a memory-based distributed computing framework was put forward. To improve the parallelism and performance of Bottom-Up Construction (BUC), a novel data cube computation algorithm based on Spark and BUC was designed, referred to as BUCPark (BUC on Spark). Moreover, to avoid the expansion of intermediate data cubes in memory, BUCPark was further improved into LBUCPark (Layered BUC on Spark), which could take full advantage of Spark's reused and shared memory mechanism. The experimental results show that LBUCPark outperforms the BUC and BUCPark algorithms in terms of computing performance, and is capable of computing data cubes efficiently in the big data era.
Parallel fuzzy C-means clustering algorithm in Spark
WANG Guilan, ZHOU Guoliang, SA Churila, ZHU Yongli
Journal of Computer Applications    2016, 36 (2): 342-347.   DOI: 10.11772/j.issn.1001-9081.2016.02.0342
With growing data volumes and timeliness requirements, clustering algorithms need to adapt to big data and deliver higher performance. A new algorithm named Spark-FCM was proposed, which parallelizes Fuzzy C-Means (FCM) on the Spark distributed in-memory computing platform. Firstly, the matrix was partitioned horizontally into vector sets and stored distributedly, so that different vectors resided on different nodes. Then, based on the characteristics of the FCM algorithm, the matrix operations, including multiplication, addition and transpose, were redesigned for distributed storage and cache sensitivity. Finally, the Spark-FCM algorithm, combining these matrix operations with the Spark platform, was implemented. The primary data structures of the algorithm adopted distributed matrix storage, with little data movement between nodes and distributed computation in each step. The test results in stand-alone and cluster environments show that Spark-FCM has good scalability and can handle large-scale datasets, that performance scales linearly with data size, and that the performance in a cluster environment is 2 to 3 times higher than that in a stand-alone environment.
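The FCM iteration that Spark-FCM distributes can be sketched on a single machine for scalar data (a sketch using the standard FCM update rules; initialisation and stopping are simplified, and the distributed matrix layout is omitted):

```python
def fuzzy_c_means(points, n_clusters=2, m=2.0, n_iters=50):
    """Plain Fuzzy C-Means on scalar data (fuzzifier m > 1).

    Alternates the two classic updates: memberships from distances,
    centers from membership-weighted means.
    """
    # Spread-out initialisation: sample every len/n_clusters-th point.
    centers = points[:: max(1, len(points) // n_clusters)][:n_clusters]
    for _ in range(n_iters):
        # Membership u[i][j]: degree to which point j belongs to cluster i.
        u = [[0.0] * len(points) for _ in range(n_clusters)]
        for j, x in enumerate(points):
            dists = [abs(x - c) or 1e-12 for c in centers]
            for i in range(n_clusters):
                u[i][j] = 1.0 / sum((dists[i] / d) ** (2 / (m - 1))
                                    for d in dists)
        # Center update: weighted mean with weights u^m.
        centers = [sum(u[i][j] ** m * x for j, x in enumerate(points)) /
                   sum(u[i][j] ** m for j in range(len(points)))
                   for i in range(n_clusters)]
    return centers

# Two well-separated groups: centers converge near 0.1 and 9.9.
centers = sorted(fuzzy_c_means([0.0, 0.1, 0.2, 9.8, 9.9, 10.0]))
```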
Wireless communication security strategy based on differential flag byte
TANG Shang, LI Yonggui, ZHU Yonggang
Journal of Computer Applications    2016, 36 (1): 212-215.   DOI: 10.11772/j.issn.1001-9081.2016.01.0212
Since the computational complexity of public key cryptography is high, and the Power Delay Profile (PDP) model is limited by the distance between the attacker and the user, a wireless communication security strategy based on the Differential Flag Byte (DFB) was proposed for identifying and defending against impersonation attacks, and the equation for generating the DFB was given. The strategy used the transmitted data to generate the DFB equation, establishing the correlation that the current flag byte of a transmitted data frame is determined by the relevant parameters of the previous frame. Finally, the receiving terminal identified attacks by testing and verifying the DFB received in the data frame against a decision threshold. Theoretical analysis shows that DFB can prevent repeated impersonation attacks even when the attacker knows the communication parameters; meanwhile, the attacker's effective attack time is shorter, the attack cycle is longer, and the attacker is confined to a finite ellipse in space. Finally, a simulation analysis was carried out with a simple DFB. The results show that wireless communication based on the simple DFB strategy can identify and defend against impersonation attacks by setting an appropriate threshold when the communication system's Signal-to-Noise Ratio (SNR) is above -4 dB.
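The chaining idea, in which each frame's flag byte is a function of the previous frame, can be illustrated as follows. The paper defines a specific DFB equation; `next_flag` here is a hypothetical stand-in that merely shows the frame-to-frame dependency and the receiver-side check:

```python
def next_flag(prev_frame, prev_flag):
    """Derive the next flag byte from the previous frame's content.

    The real DFB equation is defined in the paper; this stand-in simply
    folds the previous frame and flag into one byte to show the chaining.
    """
    flag = prev_flag
    for byte in prev_frame:
        flag = (flag * 31 + byte) & 0xFF
    return flag

def verify(prev_frame, prev_flag, received_flag):
    """Receiver side: accept a frame only if its flag matches the chain."""
    return received_flag == next_flag(prev_frame, prev_flag)
```

An attacker who injects a frame without knowing the previous frame's contents cannot produce a flag byte that passes `verify`.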
Real-time clustering for massive data using Storm
WANG Mingkun, YUAN Shaoguang, ZHU Yongli, WANG Dewen
Journal of Computer Applications    2014, 34 (11): 3078-3081.   DOI: 10.11772/j.issn.1001-9081.2014.11.3078

In order to improve the real-time response capability of massive data processing, the Storm distributed real-time platform was introduced for data mining, and a Storm-based version of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering algorithm was designed to deal with massive data. The algorithm was divided into three main steps: data collection, clustering analysis and result output. All procedures were realized with the pre-defined components of Storm and submitted to the Storm cluster for execution. Comparative analysis and performance monitoring show that the system has the advantages of low latency and high throughput, which proves that Storm is well suited to real-time processing of massive data.
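The clustering-analysis step runs DBSCAN; a single-machine reference version of the algorithm (without the Storm topology) looks like:

```python
def dbscan(points, eps=1.0, min_pts=3):
    """Plain DBSCAN; returns one cluster label per point (-1 = noise)."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # provisionally noise
            continue
        cluster += 1                     # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # noise reachable from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neigh = neighbours(j)
            if len(j_neigh) >= min_pts:  # j is itself a core point: expand
                queue.extend(j_neigh)
    return labels
```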

Improved single pattern matching algorithm based on Sunday algorithm
ZHU Yongqiang, QIN Zhiguang, JIANG Xue
Journal of Computer Applications    2014, 34 (1): 208-212.   DOI: 10.11772/j.issn.1001-9081.2014.01.0208
When the Sunday algorithm is applied to Chinese text encoded in Unicode, problems arise: using whole Chinese characters directly to build the shift table causes space expansion, while splitting each Chinese character into two bytes reduces the space consumption at the cost of matching speed. Concerning the time performance degradation of the Sunday algorithm in the byte-split environment of Unicode-encoded Chinese text, and exploiting the internal structure of Chinese character units in Unicode, the improved algorithm optimized the auxiliary shift table and matching rules of the original Sunday algorithm for the byte-split environment. Consequently, the proposed algorithm not only solves the space expansion problem, but also improves the time performance of the Sunday algorithm in this environment. Finally, the improved time and space performance of the algorithm is verified via simulation.
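For reference, the original Sunday algorithm that the improvement builds on can be written as follows (a standard textbook version, not the improved byte-split variant):

```python
def sunday_search(text, pattern):
    """Classic Sunday single-pattern matching; first match index or -1.

    On a mismatch, the window shifts according to the character just past
    the current window: by the full pattern length + 1 if that character
    does not occur in the pattern, otherwise just far enough to align its
    rightmost occurrence.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # shift[c]: distance from c's rightmost occurrence to one past the end.
    shift = {c: m - i for i, c in enumerate(pattern)}
    pos = 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            return pos
        if pos + m == n:
            return -1          # no character past the window left to use
        pos += shift.get(text[pos + m], m + 1)
    return -1
```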
Normalized cumulant blind equalization algorithm based on oversampling technology
ZHANG Xiaoqin, HU Yongsheng, ZHANG Liyi
Journal of Computer Applications    2013, 33 (09): 2463-2466.   DOI: 10.11772/j.issn.1001-9081.2013.09.2463
The traditional baud-spaced equalizer can compensate the aliased frequency response characteristics of the received signal, but cannot compensate the channel distortion. Concerning this problem, a normalized cumulant blind equalization algorithm based on oversampling technology was proposed. The received signal was oversampled first, and then a variable step size was used to adaptively adjust the weight coefficients of the equalizer. This not only avoids falling into local optima, but also compensates the channel distortion effectively. The simulation results show that the algorithm can effectively speed up convergence and reduce the steady-state residual error.
k-nearest neighbors classifier over manifolds
WEN Zhi-qiang, HU Yong-xiang, ZHU Wen-qiu
Journal of Computer Applications    2012, 32 (12): 3311-3314.   DOI: 10.3724/SP.J.1087.2012.03311
To address the problems of noisy samples and high dimensionality, a k-nearest neighbors classifier over manifolds was presented. Firstly, the classic k-nearest neighbors rule was extended by Bayes' theorem, with the local joint probability density estimated by kernel density estimation in the classifier. In addition, after building a noise sample model, an objective function was defined via an improved marginal intrinsic graph and its weight matrix, in order to search for the optimal dimension-reducing mapping matrix. At last, the details of the k-nearest neighbors algorithm over manifolds were provided. The experimental results demonstrate that the presented method has a lower classification error rate than six classic methods in most cases on twelve datasets.
Hardware acceleration based on IMPULSE C of ECC over GF(P)
CUI Qiang-qiang, JIN Tong-biao, ZHU Yong
Journal of Computer Applications    2011, 31 (09): 2385-2388.   DOI: 10.3724/SP.J.1087.2011.02385
Elliptic Curve Cryptography (ECC) over GF(P) was studied in depth and programmed in IMPULSE C. Firstly, a parallelization technique was proposed to speed up modular addition and modular doubling in standard projective coordinates, and further parallelization was obtained through the compiler while programming. Secondly, according to the characteristics of IMPULSE C, a rational hardware-software partition of the ECC algorithm was made: the computationally intensive point multiplication was treated as the hardware part, implemented and accelerated on a Field Programmable Gate Array (FPGA), while the ECC protocol was treated as the software part and implemented on the CPU, with VHDL code generated for the hardware part. The IMPULSE C code was simulated by CoDeveloper, and the VHDL code was analyzed and synthesized by Xilinx ISE 10.1. On this basis, the design was prototyped on a Xilinx Virtex-5 xc5vfx70t FPGA board. The experimental results indicate that the proposed method can complete a P-192 point multiplication within 2.9 ms at a 133 MHz clock, and shows better throughput than existing reported implementations.
Line drawing algorithm based on pixel chain
Xiao-lin ZHU, Yong CAI, Jiang-sheng ZHANG
Journal of Computer Applications    2011, 31 (04): 1057-1061.   DOI: 10.3724/SP.J.1087.2011.01057
In order to increase the efficiency of line drawing when the slope of the line is greater than 0.5, a line drawing algorithm based on pixel chains was proposed. A straight line was treated as an aggregation of several horizontal pixel chains or diagonal pixel chains. An algorithm for drawing the line in the reverse direction, similar to the Bresenham algorithm, was introduced, by which a slope between 0.5 and 1 was converted to one between 0 and 0.5 while generating the line, and one pixel chain was generated per decision. The simulation results show that the straight line generated by the new algorithm is identical to that generated by the Bresenham algorithm, but the amount of calculation is greatly reduced: the new algorithm uses only integer addition and multiplication, and is suitable for hardware implementation. At the same design complexity, the generation speed of the new algorithm is 4 times that of the Bresenham algorithm.
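For comparison, the classic Bresenham baseline against which the pixel-chain algorithm is measured can be sketched for the first octant (a standard version, not the paper's reverse-direction variant):

```python
def bresenham_line(x0, y0, x1, y1):
    """Integer-only Bresenham rasterisation for slopes between 0 and 1."""
    dx, dy = x1 - x0, y1 - y0
    assert 0 <= dy <= dx, "sketch limited to the first octant"
    points, err, y = [], 2 * dy - dx, y0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if err > 0:
            y += 1            # step up to the next row
            err -= 2 * dx
        err += 2 * dy
    return points
```

The pixel-chain idea groups runs of pixels sharing the same `y` (or the same diagonal) into one chain, so one decision emits a whole chain instead of a single pixel.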
Curve watermarking technique for protecting copyright of digital maps
ZENG Hua-Fei, HU Yong-Jian
Journal of Computer Applications   
In order to protect the copyright of digital maps, a curve-based watermarking technique for fingerprinting digital maps was proposed. First, the embeddable sample points were selected according to their curvatures, and then a spread-spectrum watermark sequence (also called a fingerprint sequence) was embedded into the coordinates of the embeddable sample points. To achieve a high-quality watermarked curve and a robust watermark, Bezier segments were used to reconstruct the watermarked curve. The watermarked curve suffers smaller embedding distortions and is robust against common geometrical distortions (e.g., translation, rotation, and scaling), collusion, and printing-scanning attacks.
Modeling security software architecture based on process algebra
GAN Hou-yong, WU Guo-qing, HU Yong-tao
Journal of Computer Applications    2005, 25 (12): 2811-2813.  
On the basis of analyzing the security of a software architecture model based on process algebra, compatibility checking and interoperability checking were extended from single software architectures to architectural styles, and the software architecture description language was expanded. A software architecture can thus be modeled not only through a family of sequential process algebra terms, but also through an invocation of a previously defined architectural type. The method is introduced through an example.
Information processing of target redundancy based on adaptive Fuzzy C-Means clustering analysis
LI Wei-min, ZHU Yong-feng, FU Qiang
Journal of Computer Applications    2005, 25 (04): 949-951.   DOI: 10.3724/SP.J.1087.2005.0949
For the plot data output by a radar receiver, the distribution of the plots and the causes of the target redundancy phenomenon were discussed, and an Adaptive Fuzzy C-Means Clustering (AFCMC) algorithm for agglomerating the detected plots was proposed. This algorithm provides a way to handle target redundancy, and the simulation results verify its validity.
Research on the model of smart meeting room with context-aware capacity
MAN Jun-feng, JIN Ke-yin, HU Yong-xiang
Journal of Computer Applications    0, (): 2957-2960.  
Taking advantage of multi-agent systems, the Semantic Web, context awareness and logical inference, and drawing on earlier pervasive computing systems, a new model of a Smart Meeting Room (SMR) was presented. The SMR used the Web Ontology Language (OWL) to support knowledge sharing and context reasoning, used logical inference to detect and resolve inconsistent context knowledge, and provided users with a policy language to control their private information.